The Prompting Hierarchy: From Instructions to Logic
Prompting has evolved from simple command-based inputs to sophisticated reasoning architectures that guide a model's internal processing path.
Core Concepts
- Zero-shot Prompting: Providing a task description without any examples (e.g., "Translate this to French").
- Few-shot Prompting: Using "Demonstrations" (input-output pairs) to define the label space and desired format.
- Chain-of-Thought (CoT): A prompting technique that encourages the model to produce intermediate reasoning steps.
- Emergent Properties: Complex reasoning is not explicitly programmed but "emerges" in models typically exceeding 10B parameters.
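The zero-shot versus few-shot distinction above can be sketched as prompt construction. This is a minimal illustration; the demonstration pairs and helper names (`zero_shot_prompt`, `few_shot_prompt`) are invented for the example, not part of any library.

```python
# Contrast zero-shot and few-shot prompt construction.
# Demonstrations are (input, output) pairs that define the label
# space and the desired output format.

def zero_shot_prompt(task: str) -> str:
    """A bare task description with no examples."""
    return task

def few_shot_prompt(demonstrations: list[tuple[str, str]], task: str) -> str:
    """Prepend solved examples before the actual task, ending with an
    open 'Output:' slot for the model to complete."""
    demo_block = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demonstrations)
    return f"{demo_block}\nInput: {task}\nOutput:"

demos = [("Hello", "Bonjour"), ("Thank you", "Merci")]
print(few_shot_prompt(demos, "Good morning"))
```

The few-shot version never states the task explicitly; the model is expected to infer "translate to French" from the demonstrations alone, which is the essence of in-context learning.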
The Reasoning Shift
- Instruction-Following: Direct mapping of input to output.
- In-Context Learning: Learning patterns from provided examples (Few-shot).
- Logical Decomposition: Breaking problems into sequential steps (CoT).
- Process Supervision: Prioritizing the accuracy of the "thinking" steps over the final answer (as seen in OpenAI o1).
Key Insight
Model performance in few-shot scenarios is highly sensitive to the distribution of labels and the relevance of demonstrations, rather than just the quantity of examples.
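The sensitivity to demonstration relevance suggests selecting examples by similarity to the query rather than at random. A minimal sketch, using word overlap as a stand-in similarity metric (production systems typically use embedding similarity instead); `select_demonstrations` is a hypothetical helper:

```python
# Rank candidate demonstrations by lexical overlap with the query
# and keep the top k most relevant ones.

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def select_demonstrations(candidates, query, k=2):
    """Return the k (input, output) pairs whose inputs best match the query."""
    ranked = sorted(candidates, key=lambda d: word_overlap(d[0], query), reverse=True)
    return ranked[:k]

pool = [("the cat sat", "A"), ("stock prices rose", "B"), ("the cat ran", "C")]
print(select_demonstrations(pool, "the cat jumped", k=2))
```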
Question 1
Which method relies on providing "demonstrations" to guide the model?
Question 2
True or False: Chain-of-Thought reasoning is a capability found in almost all AI models regardless of size.
Challenge: Optimizing Logic Puzzles
Scenario: Optimize a prompt for a model that is struggling with a logic puzzle.
You are using an LLM to solve the following puzzle: "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"
Currently, you are passing the prompt exactly as written above, and the model incorrectly answers "$0.10" (the correct answer is $0.05: the ball costs $0.05 and the bat $1.05).
Task 1
Identify if the current prompt is Zero-shot or Few-shot.
Solution:
The current prompt is Zero-shot because it provides the task description without any prior examples or demonstrations of similar solved puzzles.
Task 2
Inject the Zero-shot CoT trigger phrase to improve reasoning accuracy. Rewrite the prompt.
Solution:
"A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? Let's think step by step:"
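The rewrite above can be done programmatically. A minimal sketch, where `add_cot_trigger` is an invented helper and the trigger phrase is the standard zero-shot CoT suffix:

```python
# Append the zero-shot CoT trigger so the model produces intermediate
# reasoning steps before committing to a final answer.
COT_TRIGGER = "Let's think step by step:"

def add_cot_trigger(prompt: str, trigger: str = COT_TRIGGER) -> str:
    """Append the trigger phrase to the end of any prompt."""
    return f"{prompt.rstrip()} {trigger}"

puzzle = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
          "more than the ball. How much does the ball cost?")
print(add_cot_trigger(puzzle))
```

Note that the prompt remains zero-shot: no demonstrations are added, only an instruction that elicits logical decomposition.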